
    Space, time and motion in a multisensory world

    When interacting with environmental events, humans acquire information from different senses and combine these inputs into a coherent representation of the world. The present doctoral thesis investigates how humans represent space, time, and motion through the auditory and visual sensory modalities. It has been widely demonstrated that different sensory systems are predisposed toward different domains of representation, with hearing prevailing in the time domain and vision being the most reliable sense for the space domain. Given this strong link between sensory modality and domain of representation, one objective of this thesis is to deepen our knowledge of the neural organization of multisensory spatial and temporal skills in healthy adults. In addition, using blindness as a model to unravel the role of vision in the development of spatio-temporal abilities, this thesis explores the interaction of the spatial and temporal domains in the acoustic motion perception of early blind individuals. The interplay between space and time has also been explained as the result of humans performing actions in the surrounding environment, since carrying out goal-directed motor behaviors requires associating the spatial and temporal information of one's target within a shared mental map. In this regard, the present project also asks how the brain processes the spatio-temporal cues of external events when manually intercepting moving objects with one hand. Finally, in light of the above results, this dissertation includes the development of a novel portable device, named MultiTab, for the behavioral evaluation of spatial, temporal, and motor-response processing through the visual and acoustic modalities.

    For the purposes of this thesis, four methodological approaches were employed: i) electroencephalography (EEG), to explore the cortical activation associated with multisensory spatial and temporal tasks; ii) psychophysical methods, to measure the relationship between stimuli in motion and the acoustic speed perception of blind and sighted individuals; iii) motion capture, to measure movement indices during an object-interception task; and iv) the design and technical-behavioral validation of a new portable device.

    The studies in this dissertation yield the following results. First, this thesis highlights an early cortical gain modulation of sensory areas that depends on the domain of representation to be processed, with auditory areas mainly involved in the multisensory processing of temporal inputs and visual areas in that of spatial inputs. Moreover, for the spatial domain specifically, the neural modulation of visual areas is also influenced by the kind of spatial layout in which multisensory stimuli are arranged. Second, this project shows that lack of vision influences the ability to process the speed of moving sounds by altering how blind individuals make use of the sounds' temporal features. This result suggests that visual experience in the first years of life is a crucial factor for dealing with combined spatio-temporal information. Third, the data of this thesis demonstrate that typically developing individuals who manually intercept a moving object with one hand take the item's spatio-temporal cues into account, adjusting their interceptive movements according to the object's speed. Finally, the design and validation of MultiTab show its utility for evaluating multisensory processing, such as the manual localization of audiovisual spatialized stimuli. Overall, the findings of this thesis contribute to a more in-depth picture of how the human brain represents space, time, and motion through different senses. Moreover, they have promising implications for exploring novel technological methods for assessing and training these dimensions in typical and atypical populations.

    Assessment of spatial reasoning in blind individuals using a haptic version of the Kohs Block Design Test

    Past research investigating the spatial abilities of visually impaired people has provided conflicting results. There is thus an urgent need to develop standardized tests for the evaluation of spatial cognition when vision is absent or disrupted. To this aim, we developed a haptic version of the Kohs Block Design Test and investigated the spatial non-verbal reasoning of early blind, late blind, and sighted individuals. Participants reproduced 3D-printed haptic configurations by assembling blocks with different textures within a time limit. Results showed that early blind individuals correctly reproduced fewer haptic designs than the other two groups, whereas the assembly time of correct responses was similar across groups. Moreover, blindness duration (in years) did not appear to affect the correctness of performance: no significant correlation between the two variables was observed for early or late blind participants. Since only early blind individuals displayed difficulties in mentally representing the haptic configurations and manipulating multiple pieces of spatial information, we conclude that early visual deprivation may affect spatial reasoning capabilities. The present study adds new insights into the role of visual experience in the development of spatial skills and represents a first step in the adaptation of standardized tests for the assessment of spatial cognitive abilities in visually impaired people.

    Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision

    hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remains unsolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the amount of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed an axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of the occipitotemporal networks supporting spatial hearing in the sighted.

    SIGNIFICANCE STATEMENT: Spatial hearing helps living organisms navigate their environment. This is certainly even more true for people born blind. How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that the amount of motion direction and sound position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization was accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.

    Ubiquitous enhancement of spatial hearing in congenitally blind people

    Vision is thought to scaffold the development of spatial abilities in the other senses. How, then, does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory localization abilities in congenitally blind and sighted individuals using a psychophysical minimum audible angle task free of sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, and frontal and rear spaces. We observed unequivocal enhancement of spatial hearing abilities in congenitally blind people, irrespective of the field of space assessed. Our results conclusively demonstrate that visual experience is not a mandatory prerequisite for developing optimal spatial hearing abilities and that, in striking contrast, the lack of vision leads to a ubiquitous enhancement of auditory spatial skills.

    General enhancement of spatial hearing in congenitally blind people

    Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.
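
    As a side note, not part of the original abstract: the minimum-audible-angle threshold in a task of this kind is typically estimated by fitting a psychometric function to the discrimination responses. The following is a minimal sketch of that analysis, assuming a cumulative-Gaussian model; the angular separations, response proportions, and the 75%-correct threshold criterion are all illustrative, not the study's actual data or procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Cumulative-Gaussian psychometric function: probability of judging the
# second source "to the right" of the first, as a function of the
# angular separation between them (degrees).
def psychometric(angle, mu, sigma):
    return norm.cdf(angle, loc=mu, scale=sigma)

# Illustrative (invented) data: separations tested and the proportion
# of "right" responses observed at each separation.
angles = np.array([-8, -4, -2, -1, 1, 2, 4, 8], dtype=float)
p_right = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90, 0.97])

# Fit the bias (mu) and slope (sigma) of the psychometric function.
(mu, sigma), _ = curve_fit(psychometric, angles, p_right, p0=[0.0, 2.0])

# One common definition of the minimum audible angle: the separation
# needed to move from chance (50%) to 75% correct discrimination.
maa = norm.ppf(0.75, loc=0.0, scale=sigma)
print(f"bias = {mu:.2f} deg, sigma = {sigma:.2f} deg, MAA = {maa:.2f} deg")
```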

    Task-dependent spatial processing in the visual cortex

    <p>To solve spatial tasks, the human brain asks for support from the visual cortices. Nonetheless, representing spatial information is not fixed but depends on the reference frames in which the spatial inputs are involved. The present study investigates how the kind of spatial representations influences the recruitment of visual areas during multisensory spatial tasks. Our study tested participants in an electroencephalography experiment involving two audio–visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that spatial tasks specifically modulated the occipital event-related potentials (ERPs) after the onset of the stimuli. We observed a greater contralateral early occipital component (50–90 ms) when participants solved the spatial bisection, and a more robust later occipital response (110–160 ms) when they processed the spatial localization. This observation suggests that different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.</p&gt

    Somatosensory-guided tool use modifies arm representation for action

    Tool use changes both peripersonal space and body representations, several effects now collectively termed tool embodiment. Since somatosensation was typically accompanied by vision in most previous tool-use studies, whether somatosensation alone is sufficient for tool embodiment remains unknown. Here we address this question via a task assessing arm length representation at an implicit level: we compared movement kinematics in blindfolded healthy participants grasping an object before and after tool use. Results showed longer latencies and smaller peaks in the arm transport component after tool use, consistent with a lengthened arm representation. No changes were found in the hand grip component, and correlations revealed similar kinematic signatures in naturally long-armed participants. The kinematic changes did not interact with target object position, further corroborating the finding that somatosensory-guided tool use may increase the represented size of the participant's arm. Control experiments ruled out alternative interpretations based on altered hand position sense. In addition, our findings indicate that tool-use effects are specific to the implicit level of arm representation, as no effect was observed on explicit estimates of forearm length. These findings demonstrate for the first time that somatosensation is sufficient for incorporating a tool that has never been seen or used before.
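
    The transport-component indices mentioned above (latencies and velocity peaks) are commonly extracted from motion-capture trajectories along the following lines. This is only a sketch on synthetic data: the sampling rate, the minimum-jerk profile standing in for a real wrist trajectory, and the 5%-of-peak onset criterion are all assumptions for illustration.

```python
import numpy as np

# Synthetic wrist trajectory sampled at 200 Hz: a smooth 30 cm reach
# standing in for real motion-capture data.
sfreq = 200.0
t = np.arange(0, 1.0, 1.0 / sfreq)
s = 10 * t**3 - 15 * t**4 + 6 * t**5  # minimum-jerk time course
position = np.column_stack([0.30 * s, np.zeros_like(t), np.zeros_like(t)])

# Tangential velocity of the transport component (m/s).
velocity = np.linalg.norm(np.gradient(position, 1.0 / sfreq, axis=0), axis=1)

# Movement onset: first sample exceeding a 5% peak-velocity threshold.
peak = velocity.max()
onset_idx = np.argmax(velocity > 0.05 * peak)
peak_idx = velocity.argmax()

print(f"peak velocity: {peak:.2f} m/s")
print(f"onset latency: {t[onset_idx] * 1000:.0f} ms")
print(f"time to peak velocity: {(t[peak_idx] - t[onset_idx]) * 1000:.0f} ms")
```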

    Decoding auditory space in hMT+/V5 and the planum temporale of the sighted and the blind individuals

    The middle occipito-temporal cortex (hMT+/V5) is a region that responds preferentially to visual motion in sighted people (Tootell et al., 1995; Watson et al., 1993; Zeki et al., 1991). In cases of early visual deprivation, hMT+/V5 enhances its response tuning to moving sounds (Dormal et al., 2016; Jiang et al., 2014). However, whether hMT+/V5 contains information about sound directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remains unsolved.

    Decoding auditory motion direction and location in hMT+/V5 and Planum Temporale of sighted and blind individuals

    In sighted individuals, a portion of the middle occipito-temporal cortex (hMT+/V5) responds preferentially to visual motion, whereas the planum temporale (PT) responds preferentially to auditory motion. In cases of early visual deprivation, hMT+/V5 enhances its response tuning toward moving sounds, but the impact of early blindness on the PT remains elusive. Moreover, whether hMT+/V5 contains sound direction selectivity, and whether the reorganization observed in the blind is motion specific or also involves auditory localization, is equivocal. We used fMRI to characterize the brain activity of sighted and early blind individuals listening to left, right, up, and down moving and static sounds. To create a vivid and ecological sensation of sound location and motion, we used individual in-ear stereo recordings made outside the scanner, which were replayed to the participants in the scanner. Whole-brain univariate analysis revealed preferential responses to auditory motion in both sighted and blind participants in a dorsal fronto-temporo-parietal network including the PT, as well as in a region overlapping with the most anterior portion of hMT+/V5. Blind participants showed an additional preferential response in the more posterior region of hMT+/V5. Multivariate pattern analysis revealed significant decoding of auditory motion direction in independently localized PT and hMT+/V5 in both blind and sighted participants. However, classification accuracies in the blind were significantly higher in hMT+/V5 and lower in PT compared with sighted participants. Interestingly, decoding sound location showed a similar pattern of results, although accuracies were lower than those obtained for motion direction. Together, these results suggest that early blindness triggers enhanced tuning for auditory motion direction and auditory location in hMT+/V5, occurring in conjunction with a reduced computational involvement of the PT. These results shed important light on how sensory deprivation triggers a network-level reorganization between occipital and temporal regions typically dedicated to a specific function.
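
    To make the multivariate pattern analysis concrete, here is a minimal sketch of the kind of cross-validated classification it refersns to: decoding four motion directions from region-of-interest voxel patterns. The pattern matrix below is random stand-in data and the classifier choice is an assumption; in the actual study, patterns would come from the independently localized hMT+/V5 or PT voxels.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: one activity pattern per trial over 200 ROI voxels,
# labelled with four motion directions (left, right, up, down).
n_trials_per_dir, n_voxels = 20, 200
X = rng.normal(size=(4 * n_trials_per_dir, n_voxels))
y = np.repeat(["left", "right", "up", "down"], n_trials_per_dir)

# Linear classifier with per-voxel standardization, evaluated with
# stratified cross-validation; chance level is 25% for four classes.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```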